
Enable TFLite model parsing with FlatBuffer support and comprehensive TFLite enhancements#146

Open
tdarote wants to merge 4 commits into qualcomm-linux:main from tdarote:tflite

Conversation


@tdarote tdarote commented Jan 30, 2026

This pull request enables TFLite model parsing by integrating FlatBuffers support and implements comprehensive enhancements to the TFLite recipe.

Key Changes:
-> FlatBuffer Integration for TFLite:
- Add flatbuffers bbappend file to enable TFLite's schema handling capabilities for model parsing

-> Comprehensive TFLite Enhancements:
- Add benchmark_model config option
- Fix protobuf dependency in benchmark tools
- Add dynamic OpenCL library loading support
- Exclude subdirectories from all builds
- Force delegate symbols from shared library
- Add version support to C API
- Fix label_image dependencies
- Add install rule for C interface shared library

@lumag
Contributor

lumag commented Jan 30, 2026

Waiting for the patches to be posted upstream

Contributor

@lumag lumag left a comment


Please fix commit subject for the flatbuffers patch

@tdarote tdarote force-pushed the tflite branch 2 times, most recently from 175b3f6 to 3ae95c6 on February 2, 2026 10:31
@tdarote
Author

tdarote commented Feb 2, 2026

Waiting for the patches to be posted upstream

- Updated upstream status to submitted

@tdarote tdarote closed this Feb 2, 2026
@tdarote tdarote reopened this Feb 2, 2026
@tdarote
Author

tdarote commented Feb 2, 2026

Please fix commit subject for the flatbuffers patch

DONE

@lumag
Contributor

lumag commented Feb 2, 2026

Okay. Upstream (meta-oe) uses flatbuffers 25.12.19. To prevent possible issues with other packages which might depend on that version, we need to provide a separate version of the recipe (flatbuffers-tflite.bb, require recipes-devtools/flatbuffers/flatbuffers.bb) and use it for building TFLite.
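A minimal sketch of the layout lumag describes, with placeholder version and SRCREV values (the real ones must come from whichever FlatBuffers release TFLite's generated schema headers expect), could look like:

```bitbake
# recipes-devtools/flatbuffers/flatbuffers-tflite.bb -- sketch only.
# Reuses the upstream recipe logic but pins the FlatBuffers release that
# TFLite's generated schema headers were built against, so other packages
# can keep using the meta-oe 25.x version.
require recipes-devtools/flatbuffers/flatbuffers.bb

# Placeholder values -- substitute the matching release and commit.
PV = "24.3.25"
SRCREV = "0123456789abcdef0123456789abcdef01234567"

# Keep this variant out of world builds; only TFLite should pull it in
# via an explicit DEPENDS.
EXCLUDE_FROM_WORLD = "1"
```

In practice such a parallel recipe also has to avoid installing files that collide with the default flatbuffers packages, which the sketch above does not address.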

@tdarote
Author

tdarote commented Feb 2, 2026

Okay. Upstream (meta-oe) uses flatbuffers 25.12.19. To prevent possible issues with other packages which might depend on that version, we need to provide a separate version of the recipe (flatbuffers-tflite.bb, require recipes-devtools/flatbuffers/flatbuffers.bb) and use it for building TFLite.

--done

@lumag
Contributor

lumag commented Feb 2, 2026

This doesn't build in my test system:

| DEBUG: Python function extend_recipe_sysroot finished
| DEBUG: Executing shell function do_configure
| CMake Error at CMakeLists.txt:23 (project):
|   VERSION ".." format invalid.
|
|
| -- Configuring incomplete, errors occurred!
| WARNING: exit code 1 from a shell command.
ERROR: Task (/home/lumag/Projects/RPB/build-rpb/conf/../../layers/meta-qcom-distro/recipes-ml/tflite/tensorflow-lite_2.20.0.qcom.bb:do_configure) failed with exit code '1'
NOTE: Tasks Summary: Attempted 5751 tasks of which 5578 didn't need to be rerun and 1 failed.

@ricardosalveti
Contributor

Please explain why this flatbuffer recipe is needed, why the one from meta-oe is not enough, differences, etc, as part of your git commit message.

@tdarote
Author

tdarote commented Feb 3, 2026

Please explain why this flatbuffer recipe is needed, why the one from meta-oe is not enough, differences, etc, as part of your git commit message.

If we use the flatbuffers version from meta-oe, we hit the following build error:

| /local/mnt/workspace/tushar/kas-tflite-build/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0.qcom/sources/tensorflow-lite-2.20.0.qcom/tensorflow/compiler/mlir/lite/schema/schema_generated.h:25:41: error: static assertion failed: Non-compatible flatbuffers version included
| 25 | static_assert(FLATBUFFERS_VERSION_MAJOR == 24 &&
| | ^
| /local/mnt/workspace/tushar/kas-tflite-build/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0.qcom/sources/tensorflow-lite-2.20.0.qcom/tensorflow/compiler/mlir/lite/schema/schema_generated.h:25:41: note: the comparison reduces to '(25 == 24)'
| ninja: build stopped: subcommand failed.
|
| WARNING: /local/mnt/workspace/tushar/kas-tflite-build/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0.qcom/temp/run.do_compile.3752035:153 exit 1 from 'eval ${DESTDIR:+DESTDIR=${DESTDIR} }VERBOSE=1 cmake --build '/local/mnt/workspace/tushar/kas-tflite-build/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0.qcom/build' "$@" -- ${EXTRA_OECMAKE_BUILD}'

@tdarote tdarote requested a review from lumag February 17, 2026 15:01
lumag
lumag previously approved these changes Feb 18, 2026
@tdarote tdarote requested a review from koenkooi February 18, 2026 08:55
@quaresmajose quaresmajose left a comment

I apologize for raising more suggestions, but I promise this is the last one from me.

TF_LITE_PATCH = "0"
TF_LITE_BRANCH = "r${TF_LITE_MAJOR}.${TF_LITE_MINOR}"

SRCREV_FORMAT = "tensorflow_farmhash_gemmlowp_cpuinfo_mlDtypes_ruy_openclHeaders_vulkanHeaders_xnnpack_fft2d_fp16_kleidiai_pthreadpool_fxdiv"
@quaresmajose quaresmajose Feb 18, 2026

It would be good to keep this sorted as well.

Author

As part of the latest commit, TensorFlow (being the main project) is now listed first. All remaining entries are sorted, and the variable ordering is kept consistent with the SRC_URI layout.

git://github.com/ARM-software/kleidiai.git;protocol=https;branch=main;name=kleidiai;destsuffix=${S}/kleidiai \
git://github.com/google/pthreadpool.git;protocol=https;branch=main;name=pthreadpool;destsuffix=${S}/pthreadpool \
git://github.com/Maratyszcza/FXdiv.git;protocol=https;branch=master;name=fxdiv;destsuffix=${S}/FXdiv \
https://www.apache.org/licenses/LICENSE-2.0.txt \


We don't need to fetch the Apache license because it is already available in oe-core at ${COMMON_LICENSE_DIR}/Apache-2.0, and we can copy it within do_configure like you did.

Author

Yes, addressed. Since the Apache‑2.0 license is already available under ${COMMON_LICENSE_DIR}/Apache-2.0, there’s no need to fetch it separately. I’ve updated the recipe to simply copy it during do_configure, consistent with how other recipes handle common licenses.
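A sketch of that do_configure change, using the directory names that appear in this review (treating ${B}, the build directory, as the destination is an assumption; the recipe's actual snippets run relative to the build tree):

```bitbake
do_configure:prepend() {
    # Reuse the Apache-2.0 text shipped with oe-core instead of fetching
    # LICENSE-2.0.txt via SRC_URI.
    mkdir -p ${B}/opengl_headers ${B}/egl_headers
    cp ${COMMON_LICENSE_DIR}/Apache-2.0 ${B}/opengl_headers/opengl_headers_LICENSE.txt
    cp ${COMMON_LICENSE_DIR}/Apache-2.0 ${B}/egl_headers/egl_headers_LICENSE.txt
}
```

With this in place, both the LICENSE-2.0.txt entry in SRC_URI and its checksum become unnecessary.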

# Command used: sha256sum LICENSE-2.0.txt
# This is included for informational purposes only
# The actual source verification happens through git commit hashes in SRCREV values
SRC_URI[sha256sum] = "cfc7749b96f63bd31c3c42b5c471bf756814053e847c10f3eb003417bc523d30"
@quaresmajose quaresmajose Feb 18, 2026

With ${COMMON_LICENSE_DIR}/Apache-2.0 in use, this can be dropped.

Author

done

ln -sf ${S}/fft2d/src/fft2d/fft2d fft2d

mkdir -p opengl_headers
cp ${WORKDIR}/sources/LICENSE-2.0.txt opengl_headers/opengl_headers_LICENSE.txt


Here we can copy from oe-core license ${COMMON_LICENSE_DIR}/Apache-2.0

Author

done

cp ${WORKDIR}/sources/LICENSE-2.0.txt opengl_headers/opengl_headers_LICENSE.txt

mkdir -p egl_headers
cp ${WORKDIR}/sources/LICENSE-2.0.txt egl_headers/egl_headers_LICENSE.txt


same

Author

done

@tdarote
Author

tdarote commented Feb 18, 2026

I apologize for raising more suggestions, but I promise this is the last one from me.

No worries — thanks for all the suggestions. I’ve addressed all.

@quaresmajose

I apologize for raising more suggestions, but I promise this is the last one from me.

No worries — thanks for all the suggestions. I’ve addressed all.

Thanks

lumag previously approved these changes Feb 18, 2026
@@ -0,0 +1,271 @@
#!/usr/bin/env python3
Contributor

✖ source-license-headers-exist: The first 200 lines do not contain the pattern(s): Copyright(\s)*(\(c\)|©)?, SPDX-License-Identifier|Redistribution and use in source and binary forms, with or without (recipes-ml/tflite/files/extract_tflite_srcrevs_from_github.py)

We need a license definition for this file.
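A header that satisfies the checker's pattern, sketched here with a placeholder copyright holder and license identifier (the actual values must match whatever the project settled on, which this thread does not state), would be:

```python
#!/usr/bin/env python3
# Copyright (c) 2026 <copyright holder>
# SPDX-License-Identifier: BSD-3-Clause
```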

Author

Updated the license info in the file; can you please check once?

@koenkooi
Contributor

Trying to build yesterday's version of this PR for RB1 results in this:

| NOTE: DESTDIR=/build/tmp/work/cortexa53-qcom-linux/tensorflow-lite/2.20.0/image VERBOSE=1 cmake --install /build/tmp/work/cortexa53-qcom-linux/tensorflow-lite/2.20.0/build
| -- Install configuration: ""
| CMake Error at cmake_install.cmake:57 (file):
|   file INSTALL cannot find
|   "/build/tmp/work/cortexa53-qcom-linux/tensorflow-lite/2.20.0/build/libtensorflowlite_c.so.2.20.0":
|   No such file or directory.

Checking the build folder, only the static library is present.
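One likely shape of the fix (a sketch following the generic CMake convention; the switch the recipe actually uses may be named differently) is to make CMake build the C API as a shared object so the install rule has something to find:

```bitbake
# Sketch only: BUILD_SHARED_LIBS is the generic CMake toggle. Without it,
# cmake produces static archives and libtensorflowlite_c.so.${PV} never
# appears in the build tree for cmake_install.cmake to install.
EXTRA_OECMAKE += "-DBUILD_SHARED_LIBS=ON"
```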

@tdarote
Author

tdarote commented Feb 19, 2026

Trying to build yesterday's version of this PR for RB1 results in this:

| NOTE: DESTDIR=/build/tmp/work/cortexa53-qcom-linux/tensorflow-lite/2.20.0/image VERBOSE=1 cmake --install /build/tmp/work/cortexa53-qcom-linux/tensorflow-lite/2.20.0/build
| -- Install configuration: ""
| CMake Error at cmake_install.cmake:57 (file):
|   file INSTALL cannot find
|   "/build/tmp/work/cortexa53-qcom-linux/tensorflow-lite/2.20.0/build/libtensorflowlite_c.so.2.20.0":
|   No such file or directory.

Checking the build folder, only the static library is present.

can you check now?

@tdarote
Author

tdarote commented Feb 19, 2026

Hi @lumag,
We noticed that protobuf-camx is being used by le-camera-service, and since both TFLite and le-camera-service will become dependencies for QIMSDK, we are concerned that integration with the camera stack may introduce conflicts.
TFLite builds fine independently, but when combined with the camera service, using different protobuf variants could potentially cause issues.
To avoid this, can we also switch TFLite to use protobuf-camx instead of the upstream protobuf?

@koenkooi
Contributor

can you check now?

It works now, thanks!

koenkooi previously approved these changes Feb 19, 2026
@ievlogie

Hi @lumag, We noticed that protobuf-camx is being used by le-camera-service, and since both TFLite and le-camera-service will become dependencies for QIMSDK, we are concerned that integration with the camera stack may introduce conflicts. TFLite builds fine independently, but when combined with the camera service, using different protobuf variants could potentially cause issues. To avoid this, can we also switch TFLite to use protobuf-camx instead of the upstream protobuf?

Each AI framework and CamX currently rely on different versions of protobuf, leading to inconsistencies across the stack. Attempts to align the protobuf versions for LiteRT and TFLite resulted in functional regressions, indicating that modifying individual AI frameworks to support alternative versions is not a sustainable solution. A similar issue exists with FlatBuffer, where version mismatches create comparable challenges.
To address this, a unified and scalable approach is required for all use cases, including CamX, LiteRT, TFLite, LlamaCPP, ONNX Runtime, and ExecuTorch.
The solution that currently works reliably is the introduction of the protobuf-litert.bb recipe, which generates a dedicated protoc-litert binary. This binary is then explicitly provided to LiteRT for its build process. The same model should be adopted for other dependencies and AI frameworks to ensure consistent behavior and avoid cross‑framework version conflicts.
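The hand-off described above could be sketched like this in a consuming recipe; the recipe and binary names are taken from the comment, while the variable names follow CMake's standard FindProtobuf module and are assumptions here:

```bitbake
# Sketch: hand a dedicated protoc to a framework's CMake build so it does
# not pick up whatever protoc happens to be in the sysroot.
DEPENDS += "protobuf-litert-native"
EXTRA_OECMAKE += "-DProtobuf_PROTOC_EXECUTABLE=${STAGING_BINDIR_NATIVE}/protoc-litert"
```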

@quaresmajose

Each AI framework and CamX currently rely on different versions of protobuf, leading to inconsistencies across the stack. Attempts to align the protobuf versions for LiteRT and TFLite resulted in functional regressions, indicating that modifying individual AI frameworks to support alternative versions is not a sustainable solution. A similar issue exists with FlatBuffer, where version mismatches create comparable challenges. To address this, a unified and scalable approach is required for all use cases, including CamX, LiteRT, TFLite, LlamaCPP, ONNX Runtime, and ExecuTorch. The solution that currently works reliably is the introduction of the protobuf-litert.bb recipe, which generates a dedicated protoc-litert binary. This binary is then explicitly provided to LiteRT for its build process. The same model should be adopted for other dependencies and AI frameworks to ensure consistent behavior and avoid cross‑framework version conflicts.

Having a fork of the protobuf library for each component is still not scalable and makes maintaining such a solution cumbersome. The best approach would be to use static linking for each component.

@ievlogie

Having a fork of the protobuf library for each component is still not scalable and makes maintaining such a solution cumbersome. The best approach would be to use static linking for each component.

CamX is provided directly by Qualcomm, so its dependency alignment is generally not a concern. However, all other AI frameworks in the stack originate from external sources, which introduces additional constraints and makes version harmonization more challenging.

@quaresmajose

CamX is provided directly by Qualcomm, so its dependency alignment is generally not a concern. However, all other AI frameworks in the stack originate from external sources, which introduces additional constraints and makes version harmonization more challenging.

In my opinion the ones provided in binary form by Qualcomm are the problematic ones because they will be constantly colliding with the open versions as soon as these are updated.
All other open-source frameworks compiled in the layer will use the same libraries version. It may be challenging, but they will have to share the same library version available upstream.

@ievlogie

In my opinion the ones provided in binary form by Qualcomm are the problematic ones because they will be constantly colliding with the open versions as soon as these are updated. All other open-source frameworks compiled in the layer will use the same libraries version. It may be challenging, but they will have to share the same library version available upstream.

The issue in this patch is that it introduces a separate FlatBuffers version exclusively for TFLite. This is required because TFLite is not compatible with the FlatBuffers version provided by the latest Yocto release. The same incompatibility exists for Protobuf and FlatBuffers in LiteRT and LlamaCPP, and other AI frameworks have not yet been evaluated.
Regarding CamX, its Protobuf definitions are part of the LE Camera Services codebase, which is fully under our control. Any incompatibilities there can be resolved quickly, and the introduction of a new Protobuf recipe is not driven by LE Camera Services. At present, the latest LE Camera Server code compiles successfully using the Protobuf version included in Yocto.

…nd C API

Apply multiple patches to enhance TensorFlow Lite functionality:
- Add benchmark_model config option
- Fix protobuf dependency in benchmark tools
- Add dynamic OpenCL library loading support
- Exclude subdirectories from all builds
- Force delegate symbols from shared library
- Add version support to C API
- Fix label_image dependencies
- Add install rule for C interface shared library

Signed-off-by: Tushar Darote <tdarote@qti.qualcomm.com>
